TEAM-ID:
TEAM-NAME:
YOUR-ID:
YOUR-NAME:
(Although you work in groups, both students have to submit to Moodle; hence the name fields above.)
Zip a folder of the following:
- Project-1.ipynb and the folders misc and dataset. The rest of the files asked for in the questions below must be generated when I run the code.
- If generating any file is computationally intensive, also add filename_backup.extension, where filename.extension is the expected name of the file when I run the code. For example, opt_backup.g2o (described below) in the outermost directory; here, opt.g2o is the expected name of the file when I run the code.
- The ./misc folder.

On Moodle, all you have to submit is the Jupyter notebook. But make sure to call the necessary functions explicitly (as specified in the notebook). The name of the zipped file submitted to the Moodle Assignment portal MUST BE ID_Teamname_Firstname. More details here.
On GitHub classrooms, the latest commit before the deadline will be considered as the submission.
The deadline is Oct 16, 23:55 IST. Please get started ASAP; there is no way you can finish this project in the last few days before the deadline.
This ipython notebook (.ipynb) on GitHub is self-sufficient and has all the information you need to get started with the assignment, you don't need any corresponding PDF doc. Just fire up the notebook and get going!
General information, such as installation instructions, is in the supplementary notebook "Project-1_Code-Walkthrough". Please take a look at it before you start this assignment.
Whenever I mention some func() below, I am referring to the "helper functions" in the supplementary notebook "Project-1_Code-Walkthrough" provided to you. Whenever I ask you to insert an image below, it is better to save the image in misc and load it from there instead of directly pasting it.
[[CP-]] refers to CheckPoint, you have to ensure you do the tasks at each of the [[CP-]] places below. Not ensuring [[CP-B]] (CheckPoint-Basic) will incur heavy penalty and potentially 0 for that sub-section, and [[CP-M]] (CheckPoint-Marks) has a particular mark weightage depending on your results at that particular CP.
If you face any issues related to coding/installation, please raise an issue here. For any conceptual doubts, you can ask on Moodle or Teams as usual.
In this project, we are going to use a non-linear weighted least squares optimization approach to solve the problem of getting a better estimate of our robot's trajectory. Least squares formulations are widely used for optimization, be it computer vision or robotics or machine learning. We will dive deep into it during this project and you will have complete clarity on optimization for vector-valued residual functions.
In this "Introduction" section, I am going to introduce the SLAM problem for a robot operating in a 2D world; this corresponds to Section 2 of this project. The 1D SLAM problem (Section 1) is far simpler to understand and is described directly in Section 1.
In a 2D world, a robot has 3 degrees of freedom, i.e. its pose in the world can be expressed by the state vector $\mathbf{x}=(x, y, \theta)^{\mathrm{T}}$. For the scope of this project, we are interested only in the robot's trajectory through the $2 \mathrm{D}$ world, and NOT in distinct landmarks or the surrounding map of the environment, i.e. we are only interested in the "L"ocalization part of SLAM.
Therefore, we can represent it as a graph where the vertices represent robot poses $\mathbf{x}_{i}$ and edges represent the spatial constraints between these poses. Such a map is generally called a pose graph.
Two different kinds of constraints are necessary for pose graph SLAM. The first are odometric constraints that connect two successive states $\mathbf{x}_{i}$ and $\mathbf{x}_{i+1}$ via a motion model. Furthermore, in order to perform loop closing, the robot has to recognize places it already visited before. This place recognition is also a part of the front-end and provides the second type of constraint, the loop closure constraints. These constraints connect two not necessarily successive poses $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$.
(Source: Sunderhauf 2012)
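To make the two kinds of constraints concrete, here is a minimal illustrative sketch (toy numbers, not the assignment's data format) of a pose graph as plain Python data:

```python
# Vertices: robot poses (x, y, theta) in the 2D world
poses = [
    (0.0, 0.0, 0.00),
    (1.0, 0.0, 0.00),
    (2.0, 0.1, 0.05),
    (0.1, 0.05, 0.00),  # the robot comes back near its starting point
]

# Edges: (i, j, (dx, dy, dtheta)) -- the measured relative pose between i and j
edges = [
    (0, 1, (1.0, 0.0, 0.00)),      # odometric constraint (successive poses)
    (1, 2, (1.0, 0.1, 0.05)),      # odometric constraint
    (2, 3, (-1.9, -0.05, -0.05)),  # odometric constraint
    (0, 3, (0.1, 0.05, 0.00)),     # loop closure (non-successive poses)
]

def edge_type(i, j):
    # odometry connects x_i and x_{i+1}; anything else is a loop closure
    return "odometry" if abs(i - j) == 1 else "loop closure"

for i, j, _ in edges:
    print(f"edge {i} -> {j}: {edge_type(i, j)}")
```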
You will start from the inaccurate pose graph with odometry and loop closure information, and by the end of this project you will end up with an optimized pose graph (see the images above) which should look close to the ground truth trajectory. You can watch this video to get an intuition for what we're about to do.
Okay, that's enough theory. Let's get our hands dirty with the code!
import matplotlib.pyplot as plt
import math
import os
import jax.numpy as jnp #see supplementary notebook to see how to use this
import jax
import numpy as np
# If you're importing numpy as np for debugging purposes, please remove
# 'import numpy' before submitting and replace all np's with jnp's (more in supplementary notebook).
plt.style.use('seaborn')
A solved example for 1D SLAM which optimizes for pose variables using weighted least squares method (Gauss Newton) has been explained in the class. It has been made available here. Your first task is to code this from scratch. [[CP-M]]
For this section, you have to calculate the Jacobian analytically yourself and use it. However, you can check how correct it is using jax's jacobian. Its usage is explained in the supplementary notebook.
##############################################################################
# TODO: Code for Section 1 #
##############################################################################
odometer = np.array([1.1, 1, 1.1, -2.7, 0])
pos = np.array([0, 1.1, 2.1, 3.2, 0.5])
# Information (weight) matrix: 4 odometry edges, 1 loop closure, 1 anchor
weights = np.array([[100, 0, 0, 0, 0, 0],
                    [0, 100, 0, 0, 0, 0],
                    [0, 0, 100, 0, 0, 0],
                    [0, 0, 0, 100, 0, 0],
                    [0, 0, 0, 0, 100, 0],
                    [0, 0, 0, 0, 0, 1000]])
iterations = 5
for i in range(iterations):
    # Predicted measurements from the current state estimate
    fx = np.array([pos[0] + odometer[0],
                   pos[1] + odometer[1],
                   pos[2] + odometer[2],
                   pos[3] + odometer[3],
                   pos[0] + odometer[4],   # loop closure x0 -> x4
                   pos[0]])                # anchor on x0
    # Analytic Jacobian of the residuals w.r.t. pos
    J = np.array([[1, -1, 0, 0, 0],
                  [0, 1, -1, 0, 0],
                  [0, 0, 1, -1, 0],
                  [0, 0, 0, 1, -1],
                  [1, 0, 0, 0, -1],
                  [1, 0, 0, 0, 0]])
    # Gauss-Newton: H @ delta_x = -b
    H = J.T @ weights @ J
    fx_i = np.array([fx[0] - pos[1],
                     fx[1] - pos[2],
                     fx[2] - pos[3],
                     fx[3] - pos[4],
                     fx[4] - pos[4],
                     fx[5] - 0])
    b = J.T @ weights @ fx_i
    delta_x = -np.linalg.solve(H, b)
    print("\nIteration", i)
    print(delta_x)
    pos = pos + delta_x
    print(pos)
##############################################################################
# END OF YOUR CODE #
##############################################################################
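As a sanity check on the hand-derived Jacobian above, here is a quick sketch using plain numpy finite differences (so it stands alone; the supplementary notebook shows the jax.jacfwd route). Since the 1D residuals are linear in the state, central differences recover the Jacobian essentially exactly:

```python
import numpy as np

odometer = np.array([1.1, 1.0, 1.1, -2.7, 0.0])

def residual_1d(pos):
    # 4 odometry residuals, 1 loop-closure residual (x0 -> x4), 1 anchor on x0
    return np.array([
        pos[0] + odometer[0] - pos[1],
        pos[1] + odometer[1] - pos[2],
        pos[2] + odometer[2] - pos[3],
        pos[3] + odometer[3] - pos[4],
        pos[0] + odometer[4] - pos[4],
        pos[0],
    ])

def numeric_jacobian(f, x, eps=1e-6):
    # central finite differences, column by column
    J = np.zeros((f(x).size, x.size))
    for j in range(x.size):
        d = np.zeros_like(x)
        d[j] = eps
        J[:, j] = (f(x + d) - f(x - d)) / (2 * eps)
    return J

J_analytic = np.array([[1., -1., 0., 0., 0.],
                       [0., 1., -1., 0., 0.],
                       [0., 0., 1., -1., 0.],
                       [0., 0., 0., 1., -1.],
                       [1., 0., 0., 0., -1.],
                       [1., 0., 0., 0., 0.]])

pos0 = np.array([0.0, 1.1, 2.1, 3.2, 0.5])
assert np.allclose(numeric_jacobian(residual_1d, pos0), J_analytic, atol=1e-6)
```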
Things are about to get interesting!
A robot is travelling in an oval trajectory. It is equipped with wheel odometry for odometry information and an RGBD sensor for loop closure information. Due to noise in the wheel odometry, it generates a noisy estimate of the trajectory. Our task is to use the loop closure pairs to correct the drift.
We pose this problem as a graph optimization problem. In our graph, poses are the vertices and constraints are the edges.
In practical scenarios, we'd obtain the following from our sensors after some post-processing:
You have been given a text file named edges.txt which has all of the above 3 and follows G2O's format (as explained in class, link here).
Even the loop closure edges are related by the above model, except that they need not connect consecutive nodes k and k+1.
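For reference, SE2 lines in the g2o format look like the following (values illustrative; an EDGE_SE2 line ends with the six upper-triangular entries of the 3x3 information matrix):

```
VERTEX_SE2 0 0.0 0.0 0.0
VERTEX_SE2 1 1.0 0.0 0.0
EDGE_SE2 0 1 1.0 0.0 0.0 500.0 0.0 0.0 500.0 0.0 500.0
FIX 0
```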
Save this initial trajectory as edges-poses.g2o.
If you plot the poses initialized using odometry information, you should get the plot on the right [[CP-M]] below (this is the "noisy trajectory"); the left one is the ground truth.
(Use draw() helper function or g2o_viewer or EVO)
Use the LM algorithm. Regarding Jacobian calculation, you can use jax's jacobian as part of your main code. However, you still have to calculate it analytically and verify that it matches jax's jacobian using the Frobenius norm (frobNorm()) [[CP-M]]. Calculation and verification are compulsory, but it is your choice which one to use as part of your optimization. Use whichever is faster.
Regarding LM iterations, the stopping criterion, and information matrix values:
[[CP-B]] As your iterations proceed, you have to print relevant information at every step: the iteration number and the error value $F = \frac{1}{2} \mathbf{f}^{\top} \mathbf{\Omega} \mathbf{f}$ (notion page link).
[[CP-B]] You have to show the plots (ground truth, noisy & optimized: all 3 in a single plot) at every 10 steps or so.
[[CP-M]] You could start with information values of 500 for odometry edges, 700 for loop closure edges, and 1000 for the anchor edge (same for all dimensions). However, you have to experiment heavily with these values (given that you somehow know loop closure information is far more reliable than odometry). At the end of your experimentation, your error $F = \frac{1}{2} \mathbf{f}^{\top} \mathbf{\Omega} \mathbf{f}$ should be < 40. Explain your experimentation in detail, using tables/plots etc. if necessary.
Do not worry if you're not getting a perfect trajectory; our parametrization was oversimplified for the sake of this project. That said, it is possible to get the error below 40 and make the result at least look like an oval trajectory, even if it doesn't perfectly resemble the ground truth. Using g2o (next section), however, you will get a trajectory close to the ground truth.
##############################################################################
# TODO: Code for Section 2.1 #
##############################################################################
# CONSTANTS
NUM_VARS = 3
WEIGHT_ODOMETRY = 200
WEIGHT_LOOP_CLOSURE = 500
WEIGHT_ANCHOR = 1000
LM_LAMBDA = 1
NUM_ITERS = 100
TOL = 1e-6
# UTILITY FUNCTIONS
def read_vertex(fileName):
    with open(fileName, 'r') as f:
        A = f.readlines()
    x_arr, y_arr, theta_arr = [], [], []
    for line in A:
        if "VERTEX_SE2" in line:
            (ver, ind, x, y, theta) = line.split()
            x_arr.append(float(x))
            y_arr.append(float(y))
            theta_arr.append(float(theta))
    return jnp.array([x_arr, y_arr, theta_arr])
def read_edge(fileName):
    with open(fileName, 'r') as f:
        A = f.readlines()
    ind1_arr, ind2_arr = [], []
    del_x, del_y, del_theta = [], [], []
    for line in A:
        if "EDGE_SE2" in line:
            (edge, ind1, ind2, dx, dy, dtheta, _, _, _, _, _, _) = line.split()
            ind1_arr.append(int(ind1))
            ind2_arr.append(int(ind2))
            del_x.append(float(dx))
            del_y.append(float(dy))
            del_theta.append(float(dtheta))
    return (jnp.array(ind1_arr), jnp.array(ind2_arr),
            jnp.array(del_x), jnp.array(del_y), jnp.array(del_theta))
# plot trajectory using matplotlib
def draw(X, Y, THETA):
    ax = plt.subplot(111)
    ax.plot(X, Y, 'ro')
    plt.plot(X, Y, 'c-')
    for i in range(len(THETA)):
        # short green segment indicating the heading at each pose
        x2 = 0.25 * math.cos(THETA[i]) + X[i]
        y2 = 0.25 * math.sin(THETA[i]) + Y[i]
        plt.plot([X[i], x2], [Y[i], y2], 'g->')
    plt.show()
# motion model: apply a relative (odometry) measurement in the robot frame
def next_pose(x, y, theta, del_x, del_y, del_theta):
    x_new = x + del_x * jnp.cos(theta) - del_y * jnp.sin(theta)
    y_new = y + del_y * jnp.cos(theta) + del_x * jnp.sin(theta)
    theta_new = theta + del_theta
    return jnp.array([x_new, y_new, theta_new])
def write_poses(poses, file_name):
    with open(file_name, 'w') as f:
        for i in range(poses.shape[0]):
            st = "VERTEX_SE2 " + str(i) + " " + str(poses[i][0]) + " " \
                 + str(poses[i][1]) + " " + str(poses[i][2])
            f.write("%s\n" % st)
        # copy the edge and FIX lines over from the input file
        with open('edges.txt', 'r') as f2:
            A = f2.readlines()
        for line in A:
            if "VERTEX_SE2" not in line:
                f.write("%s" % line)
# READ INPUT FILE AND GENERATE POSES
vertex = read_vertex('edges.txt')
edges = read_edge('edges.txt')
x = vertex[0][0]
y = vertex[1][0]
theta = vertex[2][0]
poses = [jnp.array([x, y, theta])]
for i in range(edges[0].shape[0]):
    ind1 = edges[0][i]
    ind2 = edges[1][i]
    if abs(ind2 - ind1) > 1:
        # do not include loop closure constraints
        # (assumes they appear after all odometry edges in edges.txt)
        break
    del_x = edges[2][i]
    del_y = edges[3][i]
    del_theta = edges[4][i]
    # use odometry to chain to the next pose
    x, y, theta = next_pose(x, y, theta, del_x, del_y, del_theta)
    poses.append(jnp.array([x, y, theta]))
poses = jnp.array(poses)
anchor = poses[0, :]
print('Poses (Vertices):', poses.shape)
draw(poses[:, 0], poses[:, 1], poses[:, 2])
write_poses(poses, 'edges-poses.g2o')
##############################################################################
# END OF YOUR CODE #
##############################################################################

# KEY FUNCTIONS
# Order of edges:
## x_anchor
## y_anchor
## theta_anchor
## for every edge:
### x_i
### y_i
### theta_i
# Order of pose variables
## for every edge:
### x_i
### y_i
### theta_i
# WRITE ALL VERTICES IN g2o FORMAT
def derivative_with_initial(x_i, y_i, theta_i, x_f, y_f, theta_f, del_x, del_y, del_theta):
    # d(residual)/d(initial pose): identity except for the theta column
    # (jax.ops.index_update is deprecated; the .at[].set() API is used instead)
    result = jnp.eye(NUM_VARS)
    result = result.at[0, 2].set(-del_x * jnp.sin(theta_i) - del_y * jnp.cos(theta_i))
    result = result.at[1, 2].set(-del_y * jnp.sin(theta_i) + del_x * jnp.cos(theta_i))
    return result

def derivative_with_final(x_i, y_i, theta_i, x_f, y_f, theta_f, del_x, del_y, del_theta):
    # d(residual)/d(final pose) is simply -I
    return -1 * jnp.eye(NUM_VARS)
def information_matrix(edges, weight_odom, weight_loop, weight_anchor):
    # diagonal information matrix: one weight per residual dimension
    diagonal = []
    # anchor edge
    diagonal.extend([weight_anchor] * NUM_VARS)
    for i in range(edges[0].shape[0]):
        ind1 = edges[0][i]
        ind2 = edges[1][i]
        if abs(ind2 - ind1) == 1:
            # odometry edge
            diagonal.extend([weight_odom] * NUM_VARS)
        else:
            # loop closure edge
            diagonal.extend([weight_loop] * NUM_VARS)
    return jnp.diag(jnp.array(diagonal))
def residual(current_poses, edges, anchor):
    residue = []
    # anchor edge
    residue.extend(list(current_poses[0] - anchor))
    # odometry and loop closure edges
    for i in range(edges[0].shape[0]):
        ind1 = edges[0][i]
        ind2 = edges[1][i]
        del_x = edges[2][i]
        del_y = edges[3][i]
        del_theta = edges[4][i]
        estimated_pose_2 = next_pose(*current_poses[ind1], del_x, del_y, del_theta)
        residue.extend(list(estimated_pose_2 - current_poses[ind2]))
    return jnp.asarray(residue)
def jacobian(current_poses, edges):
    num_constraints = (edges[0].shape[0] * NUM_VARS) + NUM_VARS  # + anchor edge
    num_variables = current_poses.shape[0] * NUM_VARS
    # constraints are ordered as in `edges`
    # variables are ordered as in `current_poses`
    J = jnp.zeros((num_constraints, num_variables))
    # anchor edge
    J = J.at[:NUM_VARS, :NUM_VARS].set(jnp.eye(NUM_VARS))
    # other edges
    for i in range(edges[0].shape[0]):
        ind1 = edges[0][i]
        ind2 = edges[1][i]
        del_x = edges[2][i]
        del_y = edges[3][i]
        del_theta = edges[4][i]
        row = NUM_VARS + i * NUM_VARS
        J = J.at[row:row + NUM_VARS, ind1 * NUM_VARS:(ind1 + 1) * NUM_VARS].set(
            derivative_with_initial(*current_poses[ind1], *current_poses[ind2],
                                    del_x, del_y, del_theta))
        J = J.at[row:row + NUM_VARS, ind2 * NUM_VARS:(ind2 + 1) * NUM_VARS].set(
            derivative_with_final(*current_poses[ind1], *current_poses[ind2],
                                  del_x, del_y, del_theta))
    return J
def get_error(poses, edges, weights, anchor):
    f = residual(poses, edges, anchor)
    return 0.5 * f.T @ weights @ f

def step_lm(poses, edges, weights, anchor, lamda):
    # Levenberg-Marquardt step: (J^T W J + lambda I) dx = -J^T W f
    f = residual(poses, edges, anchor)
    J = jacobian(poses, edges)
    step = -jnp.linalg.solve(J.T @ weights @ J + lamda * jnp.eye(J.shape[1]),
                             J.T @ weights @ f)
    return step.reshape((-1, NUM_VARS))
get_jacobian = jax.jacfwd(residual, argnums=0)
J_theirs = get_jacobian(poses, edges, anchor)
# jacfwd returns shape (num_constraints, num_poses, NUM_VARS); flatten the variable axes
J_theirs = J_theirs.reshape(J_theirs.shape[0], -1)
J_ours = jacobian(poses, edges)

def frobNorm(P1, P2, str1="mat1", str2="mat2"):
    jnp.set_printoptions(suppress=True)
    val = jnp.linalg.norm(P1 - P2, 'fro')
    print(f"Frobenius norm between {str1} and {str2} is: {val}")

print('Jacobian using jax auto differentiation ("theirs") has shape', J_theirs.shape)
print('Jacobian calculated analytically ("ours") has shape', J_ours.shape)
frobNorm(J_theirs, J_ours, 'theirs', 'ours')
fig, ax = plt.subplots(1, 2)
ax[0].imshow(J_theirs)
ax[0].set_title('Theirs')
ax[1].imshow(J_ours)
ax[1].set_title('Ours')
plt.show()
# plot ground truth, initial estimate, and optimised trajectory side by side
def draw_traj(X, Y, THETA):
    fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(25, 6))
    gt_poses = read_vertex('gt.txt').T  # (N, 3): x, y, theta
    # Plot ground truth
    ax[0].set_title("Ground Truth")
    ax[0].plot(gt_poses[:, 0], gt_poses[:, 1], 'ro')
    ax[0].plot(gt_poses[:, 0], gt_poses[:, 1], 'c-')
    for i in range(len(THETA)):
        x2 = 0.25 * math.cos(gt_poses[i][2]) + gt_poses[i][0]
        y2 = 0.25 * math.sin(gt_poses[i][2]) + gt_poses[i][1]
        ax[0].plot([gt_poses[i][0], x2], [gt_poses[i][1], y2], 'g->')
    # Plot initial estimate (uses the global `poses`)
    ax[1].set_title("Initial Estimate")
    ax[1].plot(poses[:, 0], poses[:, 1], 'ro')
    ax[1].plot(poses[:, 0], poses[:, 1], 'c-')
    for i in range(len(THETA)):
        x2 = 0.25 * math.cos(poses[i][2]) + poses[i][0]
        y2 = 0.25 * math.sin(poses[i][2]) + poses[i][1]
        ax[1].plot([poses[i][0], x2], [poses[i][1], y2], 'g->')
    # Plot optimised trajectory
    ax[2].set_title("Optimised Trajectory")
    ax[2].plot(X, Y, 'ro')
    ax[2].plot(X, Y, 'c-')
    for i in range(len(THETA)):
        x2 = 0.25 * math.cos(THETA[i]) + X[i]
        y2 = 0.25 * math.sin(THETA[i]) + Y[i]
        ax[2].plot([X[i], x2], [Y[i], y2], 'g->')
    plt.show()
def optimise_lm(poses, edges, anchor, num_iters=NUM_ITERS, lamda=LM_LAMBDA, tol=TOL):
    weights = information_matrix(edges, WEIGHT_ODOMETRY, WEIGHT_LOOP_CLOSURE, WEIGHT_ANCHOR)
    poses_history = [poses]
    errors_history = [get_error(poses, edges, weights, anchor)]
    print('Initial Guess')
    draw_traj(poses[:, 0], poses[:, 1], poses[:, 2])
    for it in range(num_iters):
        new_poses = poses + step_lm(poses, edges, weights, anchor, lamda)
        new_error = get_error(new_poses, edges, weights, anchor)
        print('ITER', it, 'ERROR', new_error)
        if (it + 1) % 10 == 0:
            # plot every 10 iterations, as required by [[CP-B]]
            draw_traj(new_poses[:, 0], new_poses[:, 1], new_poses[:, 2])
        # standard LM damping update: increase lambda if the error grew,
        # decrease it if the step was successful
        if new_error > errors_history[-1]:
            lamda = lamda * 2
        else:
            lamda = lamda / 3
        errors_history.append(new_error)
        poses_history.append(new_poses)
        if jnp.linalg.norm(new_poses - poses) < tol:
            break
        poses = new_poses
    return poses_history, errors_history
%%time
poses_history, errors_history = optimise_lm(poses, edges, anchor, num_iters=100)
plt.plot(errors_history)
plt.show()
best = jnp.argmin(jnp.asarray(errors_history))
draw(poses_history[best][:, 0], poses_history[best][:, 1], poses_history[best][:, 2])
write_poses(poses_history[best], 'opt.g2o')
! cp 'dataset/gt.txt' 'gt.g2o'
! python3 'misc/g2o_to_kitti.py' 'gt.g2o' 'gt.kitti'
! python3 'misc/g2o_to_kitti.py' 'opt.g2o' 'opt.kitti'
! evo_rpe kitti gt.kitti opt.kitti
! evo_ape kitti gt.kitti opt.kitti
! evo_traj kitti gt.kitti opt.kitti -v --plot --plot_mode xy
! g2o_viewer opt.g2o
Give a detailed answer addressing the above questions. When I run the above code, it should follow the points described above (such as plots every 10 steps) and write the optimized poses to a file named opt.g2o. As a backup, save another file opt_backup.g2o offline beforehand.
Apart from that, save important plots and add them here to supplement your answer; for example, you could add plots at crucial stages of the optimization. You have to add useful metrics/plots from EVO (refer to the supplementary notebook). Using EVO, the bare minimum you have to report is the mean absolute pose error (APE) and the mean relative pose error (RPE). However, you are encouraged to use tools like evo_traj and add more plots/metrics. Marks will be awarded based on overall analysis & presentation, which should reflect your understanding.
Note that EVO and g2o_viewer (below) could help you in debugging.
Add answer for 2.1 here:
Installation setup is described in supplementary notebook. More details for 2.2.1 and 2.2.2 can be found in the supplementary notebook.
## edges.txt
The first task is to optimize the poses of the dataset you've been working with so far.
## intel and sphere datasets
You have been given two datasets in the data folder. You have to use g2o_viewer to optimize both of them, experimenting with the options/parameters available in the GUI. More instructions are in the supplementary notebook. Experiment until you get trajectories that look like the following:
Add images: take a screenshot of the g2o_viewer GUI after optimization for all 3 [[CP-M]] and add them here. Briefly describe what you had to do (a detailed answer is not expected). g2o could potentially give you a close-to-ground-truth trajectory for all 3, but if you are unable to get close to the ground truth, add the best you can get.
Comparison of the trajectory made up of the given poses (leftmost), after the initial guess (middle), and after optimisation (rightmost).
For optimisation we used the Gauss-Newton optimizer and achieved the best results with just 5-6 iterations.
## Sphere
Comparison of the trajectory made up of the given poses (leftmost), after the initial guess (middle), and after optimisation (rightmost).
<table><tr>
<td> <img src="./misc/sphere_original.png" alt="Original image of sphere" style="width: 300px;"/> </td>
<td> <img src="./misc/sphere_initialguess.png" alt="Initial guess on sphere" style="width: 300px;"/> </td>
<td> <img src="./misc/sphere_optimised.png" alt="Optimised trajectory of sphere" style="width: 300px;"/> </td>
</tr></table>
Note that it is mandatory to attempt EITHER 3 OR 4, only one of them. If you attempt both, the question on which you score more will be considered, and the other will count as a bonus question.
Those interested in robotics/deep learning research are encouraged to prefer 4 over 3.
* -> read information above under section "Important Information regarding Questions 3 & 4"
The current robot state is as follows: ($i$ and $k$ are used interchangeably below, sorry I am too lazy to edit now 😛)

Can you derive the below equation using geometry? (Read on)
$$x_{k+1} = x_{k} + \Delta x_{(k,k+1)} \cos(\theta_k) - \Delta y_{(k,k+1)} \sin(\theta_k) \\ y_{k+1} = y_{k} + \Delta y_{(k,k+1)} \cos(\theta_k) + \Delta x_{(k,k+1)} \sin(\theta_k) \\ \theta_{k+1} = \theta_{k} + \Delta \theta_{(k,k+1)} \tag{3}$$

In other words, we want to find the $\delta$'s in terms of the $\Delta$'s:

$$\delta x = \Delta x \cos(\theta) - \Delta y \sin(\theta) \\ \delta y = \Delta y \cos(\theta) + \Delta x \sin(\theta) \tag{2}$$
where $\delta$'s are the updates in our motion model equation: $$ x_{k+1} = x_{k} + \delta x \\ y_{k + 1} = y_k + \delta y \\ \theta_{k+1} = \theta_{k} + \delta \theta \tag{1}$$
Oh yes, $\theta$ is straightforward, i.e. $\delta \theta = \Delta \theta$ but why?
Using geometry, you could just draw and insert a self-explanatory image as the answer to this question.
If you can derive it without using geometry purely using transform matrices/algebra, that is fine too. Whatever you're comfortable.
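This is not a substitute for the derivation asked for above, but a quick numeric sanity check (a sketch under the transform-matrix view) that equation (3) is exactly SE(2) composition, and that headings simply add, i.e. $\delta \theta = \Delta \theta$:

```python
import numpy as np

def T(x, y, theta):
    # homogeneous SE(2) transform
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

xk, yk, thk = 1.0, 2.0, 0.3    # arbitrary current pose x_k
dx, dy, dth = 0.5, -0.2, 0.1   # relative motion measured in the robot frame

T_next = T(xk, yk, thk) @ T(dx, dy, dth)

# equation (3)
x_next = xk + dx * np.cos(thk) - dy * np.sin(thk)
y_next = yk + dy * np.cos(thk) + dx * np.sin(thk)

assert np.allclose(T_next[:2, 2], [x_next, y_next])
# R(theta_k) @ R(dtheta) = R(theta_k + dtheta), so the headings just add
assert np.isclose(np.arctan2(T_next[1, 0], T_next[0, 0]), thk + dth)
```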
Your answer here.
* -> read information above under section "Important Information regarding Questions 3 & 4"
(Do not get intimidated, you are not expected to do a thorough research analysis for this task. A high level understanding is sufficient.)
"Past, Present & Future of SLAM: Towards the Robust Perception Age" is an exciting survey paper of 2016 which sums up, well, the "past, present & future" of SLAM. Your task is as follows:
Go through the sections "IV. LONG-TERM AUTONOMY II: SCALABILITY" & "III. LONG-TERM AUTONOMY I: ROBUSTNESS". Don't worry, you are not expected to have a deep understanding. Skip the parts which you don't understand at all. Go through it at a high level, and take a slightly closer look at "Open Problems" in these sections.
Read up on common applications of deep learning for computer vision/robotics through blogs online (for example, the first 4 points of this). Again, you are only expected to understand them at a high level; for example, 'semantic segmentation is an application of deep learning for computer vision: the task of assigning a category to each pixel in an image'.
Firstly, summarize your understanding of the above two points.
Now, from the understanding you've gathered so far, how would you approach solving those "Open Problems"? Can these algorithms help in dealing with some of the issues you might have faced during this project? Can a deep-learning-based high-level understanding of the world help in SLAM? In the context of long-term autonomy, imagine tomorrow's world with a buddy robot R2-D2 which follows you wherever you go. Now imagine how easily the trajectory can diverge, how big the map could soon become, and how the computation could easily become intractable.
Answer the above questions in the context of this project and those 2 sections of the survey paper.
Your answer here.
Check the end of your Project-1 homepage on Notion. :)